Embedding AI in Pharma: what really happens after the demo?

Last week at The Age of AI Europe, held at The Institution of Engineering and Technology (IET), the largest multidisciplinary professional engineering institution in the world, I joined Raks Kantaria, founder of Circuit Medical, and Liz Stutz, a Consultant in Digital Transformation in Pharma, for a panel discussion on one of the most important questions facing pharma right now:

How do you actually embed AI in a pharmaceutical organisation once you move beyond the excitement, the demos and the headlines?

Rakesh opened the session in a memorable way, using a playful “Kiss, Marry, Kill” exercise to test the room’s instincts on three vendor scenarios.

  • One vendor offered transparency: sources of truth, configurable elements, limitations and clear boundaries.

  • Another represented a more lightly resourced but workable option.

  • The third was the classic “too good to be true” AI pitch: 100% accuracy, near-instant integration, and dramatic workload reduction.

The audience response was telling. The transparent, grounded option won. The overblown promise was rejected almost unanimously.

That set the tone perfectly for the discussion that followed.

AI teams are rarely the teams people expect

Rakesh’s first question was about something many organisations underestimate: who actually ends up on the AI team?

Liz made the point that AI projects often attract a lot of enthusiasm, but enthusiasm alone does not create an effective team. In her words, people are often “voluntold”, and the danger is ending up with too many people who want to be involved but are not the right people to do the work. Her view was that you need a lean core team, while giving other interested stakeholders useful roles as ambassadors or early adopters.

I built on that by saying that AI teams are rarely just technical teams. The best ones are cross-functional by design. They do not necessarily need large numbers of AI experts, but they do need the right mix of people:

  • someone who understands the business problem

  • someone who understands compliance and regulation

  • someone who can interpret the technology

  • and someone who can help the organisation adopt new ways of working.

That last role matters more than many people realise.

Because AI only creates value when it is not just built, but understood, trusted and adopted.

The demo is not the product

Rakesh then moved us into one of the most familiar pain points in AI: the polished vendor demo that looks extraordinary… until you try to use it on your own assets.

Liz’s perspective was clear: if something looks too good to be true, it usually is. She talked about how easy it is, especially in large organisations, for senior stakeholders to be dazzled by something shiny, while the people who will actually carry the implementation burden are left with the consequences.

I added that I had, earlier in my career, effectively been on the other side of this dynamic. I had worked on polished nurture-engine demos that looked spectacular in controlled conditions. And that is really the point: demos are often built in perfect environments, with curated inputs and simplified logic.

Real pharma is not like that.

Real pharma means:

  • fragmented data

  • inconsistent systems across countries

  • imperfect prompts

  • different levels of digital maturity

  • content and workflow complexity

  • regulatory review layered throughout.

So what the demo often shows is possibility. What the organisation then has to deal with is reality.

And reality is where the cracks show.

AI loses momentum when people realise the heavy lift is organisational

Rakesh framed the next part of the conversation around the attention economy: AI is constantly in the headlines, constantly described as transformative, insane, game-changing. That creates both excitement and distraction.

My view was that there are two forces at play.

The first is the knee-jerk reaction to move quickly.
The second is the expectation of instant gratification.

People want AI to work immediately and elegantly, in the same way the demo did. But implementation does not feel like that. It takes time. It creates friction. It reveals complexity. Teams get tired.

That is when momentum starts to fade.

I argued that the only real way through this is to help the wider organisation understand what is being attempted, invest in upskilling, and build confidence. Confidence matters because momentum tends to follow it.

I also made the point that failures should not be hidden. They are often some of the most useful learning moments in the process.

Liz complemented that by bringing in the team dynamic. She spoke about the point where the team stops being associated with the exciting new thing and starts becoming the group asking awkward questions, lifting carpets and finding dead bodies underneath. That is when camaraderie, shared purpose and executive sponsorship really start to matter.

Her point was important: people need backing, especially when the work becomes inconvenient.

Pilots hide complexity. Scaling exposes it.

One of the strongest parts of the discussion came when Rakesh asked about moving beyond the pilot and dealing with the messy parts no one wants to talk about: data, security, architecture, knowledge bases, operating differences across markets.

My argument was simple:

pilots hide complexity; scaling exposes it.

A pilot usually runs in a controlled environment, with relatively clean data, a manageable use case, and a limited number of people involved. But once you try to scale AI across a real pharmaceutical organisation, the complexity appears very quickly.

That complexity sits in:

  • the data

  • the process

  • the markets

  • the architecture

  • the organisation itself.

I made the point that the heavy lifting often comes before the technology is even selected. Once you understand the problem and see where AI might help, the real work starts in understanding the data, the workflow, and the local market differences.

Liz added an important counterbalance: yes, you need to think ahead, especially for global rollout, different languages and different regulatory environments, but not so far ahead that you paralyse yourself. Her point was that scalability needs to be designed for, but implementation also has to remain adaptive.

That tension is real.

Accountability is often much less clear than people admit

Rakesh then took us into decision-making and accountability, which is where many cross-functional AI projects quietly struggle.

Liz described that moment most people recognise: looking around a room for “the adultier adult” who is going to make the decision, before realising there isn’t one. Her point was that part of working in these teams is accepting that sometimes you are the person expected to make the call, or at least to decide that something needs escalation.

I took a slightly more structural angle.

I said that, at a minimum, AI projects need a few key roles clearly in place:

  • an executive sponsor

  • someone who understands the problem or use case

  • someone who understands compliance and regulatory implications

  • and someone who can lead enablement and cultural change.

Without those basics, ownership becomes fuzzy and progress slows.

Successful AI needs more than AI. It needs EI.

This was probably the part of the panel that most closely reflected my own core view.

I said that if I had to coin a phrase for the day, it would be this:

successful AI needs an organisation to deploy a good level of EI — emotional intelligence.

Because this is not just a technology problem. It is a human and organisational one.

People are not neutral about AI. They have motivations, concerns, habits, fears and learning preferences. Some organisations may be tempted to view AI primarily through a cost-reduction lens. I do not think that is the right framing.

The better question is: how do we help people work better, more confidently and more effectively with these systems?

That brings capability and training into focus.

I made the point that training is often still handled badly in organisations: too generic, too passive, too disconnected from how adults actually learn. For AI, that is not good enough.

Enablement needs to reflect:

  • adult learning principles

  • different learning styles

  • neurodiversity

  • confidence building through practical application

  • and structured learning pathways rather than one-off information dumps.

Liz added a simple but important point here: communication matters. People need to feel included in the journey.

AI agents will not magically solve legacy complexity

Rakesh then moved the discussion onto agents and legacy systems.

I made a brief correction for the Star Wars fans in the room (if you know, you know), then answered more seriously: agents do not remove complexity. The important work still happens up front in understanding processes, data, country differences, and how AI fits as part of the ecosystem rather than something layered on top of it.

This was one of the areas where I was careful not to overstate my own technical depth. But the principle still stands: AI is not a magic overlay for broken systems.

Governance is not the enemy of progress

Finally, Rakesh brought the panel to governance.

Liz’s perspective was pragmatic and clear: governance is necessary, especially when multiple people and teams are doing their own thing. In enterprise environments, some overriding governance is needed, but it should not become a reason to stop entirely.

That is very close to my own view.

In pharma, governance should not be seen as the thing that blocks innovation. It should be seen as the thing that allows innovation to scale safely and coherently.

My takeaway from the panel

If I had to reduce the whole discussion to one idea, it would be this:

successful AI adoption in pharma is not really about the technology. It is about how organisations adapt.

The companies that succeed will be the ones that:

  • build the right cross-functional teams

  • ask harder questions during evaluation

  • accept that pilots are not proof of scale

  • put clear ownership in place

  • invest in capability and enablement properly

  • and embed governance without suffocating progress.

AI can accelerate analysis, synthesis and execution.

But human judgement, organisational capability and thoughtful leadership are still what determine whether it creates value.